[WIP] Add unique indexes for containers tables #15308
Conversation
- Add :deleted to the disconnectable Containers tables, which will allow us to mark soft-deleted records.
- Change disconnected Containers records to soft-deleted, reconnecting the old_ems_id if possible.
- Add specs for changing disconnected Containers records to soft-deleted.
- Add a partial index on the :deleted column to speed up queries. It has to be a partial index, otherwise the cost is too high and PG will not pick it.
- Fix the comment: :deleted => true goes to :ems_id => nil.
- Use only the :deleted_on column for soft delete, where :deleted_on => nil marks a record that is not deleted.
- Scope ems relations to show only :deleted => false, which mimics the missing :ems_id foreign key we had before when doing an ems disconnect.
- Do soft delete using :deleted => true instead of doing an ems disconnect by setting :ems_id => nil.
- Add deleted/not-deleted scopes to the archived mixin; we should always use a scope rather than a specific query.
- Alias archived? to deleted?.
- Use :active and :archived scopes and methods for soft delete.
- Use only :deleted_on for soft delete.
- Move dependent destroy to the _all relations, since we want to cascade deletes for both active and archived records.
- Remove the duplicate :container_definitions relation; we already have it defined using the :ems_id foreign key.
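The :deleted_on convention described above can be sketched in plain Ruby. This is an illustrative in-memory model only (Record, active, and archived are hypothetical names, not the real ActiveRecord API): a record is archived iff deleted_on is set, and active iff it is nil.

```ruby
require "time"

# Hypothetical stand-in for an ActiveRecord row with a :deleted_on column.
Record = Struct.new(:id, :deleted_on)

# Mirrors the :active scope: records that have not been soft-deleted.
def active(records)
  records.select { |r| r.deleted_on.nil? }
end

# Mirrors the :archived scope: records marked soft-deleted via :deleted_on.
def archived(records)
  records.reject { |r| r.deleted_on.nil? }
end

records = [
  Record.new(1, nil),
  Record.new(2, Time.utc(2017, 5, 30)), # soft-deleted
  Record.new(3, nil)
]

active(records).map(&:id)   # => [1, 3]
archived(records).map(&:id) # => [2]
```

Keeping both scopes on the mixin (rather than ad-hoc `where` clauses) matches the commit above: callers never hand-roll the nil check.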
@cben 23d5f7f ok, so this is a first draft of the unique indexes, can you check it out? I have a few FIXMEs and questions there. Also, I need to know what is important to reconnect; right now I just delete the duplicates. But I think we should e.g. take the EmsEvents and Metrics tied to a duplicate and change their foreign key to the first original? Or we can ignore that, if we are sure that an occurrence of a duplicate is really rare. I think it will be really rare, with the exception of ContainerImage, because of the broken reconnect we had for a while.
Force-pushed from 23d5f7f to 46ec716
Checked commits Ladas/manageiq@c446182~...46ec716 with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
db/migrate/20170530102536_use_deleted_on_in_containers_tables.rb
spec/migrations/20170530102630_clean_up_duplicates_in_containers_tables_spec.rb
```ruby
def cleanup_duplicate_data_delete_all(model, unique_index_columns)
  model.where.not(:id => duplicate_data_query_returning_min_id(model, unique_index_columns)).delete_all
end
```
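The intent of this helper can be illustrated in memory: group rows by the unique-key columns, keep the row with the smallest id in each group, and delete the rest. This is a hypothetical sketch with made-up rows and columns, not the SQL the migration runs:

```ruby
# Illustrative rows; :ems_id and :ems_ref stand in for the unique-key columns.
rows = [
  { :id => 1, :ems_id => 10, :ems_ref => "a" },
  { :id => 2, :ems_id => 10, :ems_ref => "a" }, # duplicate of id 1
  { :id => 3, :ems_id => 10, :ems_ref => "b" }
]
unique_index_columns = [:ems_id, :ems_ref]

# Equivalent of duplicate_data_query_returning_min_id: per unique key,
# the minimum id is the record we keep.
keep_ids = rows.group_by { |r| r.values_at(*unique_index_columns) }
               .map { |_key, group| group.map { |r| r[:id] }.min }

# Equivalent of where.not(:id => ...).delete_all: everything else goes.
deleted = rows.reject { |r| keep_ids.include?(r[:id]) }

keep_ids                 # => [1, 3]
deleted.map { |r| r[:id] } # => [2]
```

Reconnecting child rows (EmsEvents, Metrics) before the delete, as discussed above, would mean repointing their foreign keys at the kept min-id row first.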
@kbrock hmm, it seems the group-by query on the unique-key candidates takes about 10 minutes for 130k records, I think mainly because those columns are not indexed. :-)
So should I first add normal indexes, run the cleanup, then remove them and add unique indexes in their place? Or do we just wait? Or should we do the cleanup as a rake script that needs to be run when adding the unique DB index fails?
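The first option could be sequenced in a single migration, sketched below. This is a hypothetical shape only: the table and column names are illustrative, and `cleanup_duplicate_data_delete_all` is the helper from this PR.

```ruby
class AddUniqueIndexToContainerImages < ActiveRecord::Migration
  def up
    # Plain index first, so the GROUP BY in the cleanup is not a full scan.
    add_index :container_images, [:ems_id, :ems_ref]

    cleanup_duplicate_data_delete_all(ContainerImage, [:ems_id, :ems_ref])

    # Swap the plain index for the unique one once duplicates are gone.
    remove_index :container_images, :column => [:ems_id, :ems_ref]
    add_index :container_images, [:ems_id, :ems_ref], :unique => true
  end
end
```

The trade-off is migration runtime versus a rake-task escape hatch for installs where adding the unique index fails.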
Database migrations have now been moved to the https://github.com/ManageIQ/manageiq-schema repo. Please see http://talk.manageiq.org/t/new-split-repo-manageiq-schema/2478 for instructions on how to transfer your database migrations. If this PR contains only migrations, I will leave it open for a short time during the transition, after which I will close it if it has not been moved over.
Moved to ManageIQ/manageiq-schema#19
Depends on
Only 23d5f7f is relevant